Adaptive Denoising via GainTuning

Neural Information Processing Systems

Deep convolutional neural networks (CNNs) for image denoising are typically trained on large datasets. These models achieve the current state of the art, but they do not generalize well to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose "GainTuning", a methodology by which CNN models pre-trained on large datasets can be adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the "Gain") of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type.
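The core idea — freezing all convolutional weights and adapting only one multiplicative gain per channel at test time — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the layer shapes, toy target, and loss are stand-ins chosen for illustration (the actual method uses a self-supervised denoising loss on the noisy test image).

```python
import numpy as np

def apply_channel_gains(feature_maps, gains):
    # feature_maps: (channels, H, W); gains: (channels,)
    # GainTuning-style scaling: one learned scalar per channel.
    return feature_maps * gains[:, None, None]

rng = np.random.default_rng(0)
features = rng.standard_normal((4, 8, 8))  # toy conv-layer activations
gains = np.ones(4)        # init at 1, so the pre-trained output is unchanged
target = features * 0.5   # hypothetical target, stands in for the test-time loss

# Gradient descent on the gains ONLY; the (frozen) conv weights never change.
lr = 0.1
for _ in range(100):
    out = apply_channel_gains(features, gains)
    # d/dg_c of mean squared error, computed per channel
    grad = 2 * ((out - target) * features).sum(axis=(1, 2)) / features[0].size
    gains -= lr * grad

print(np.round(gains, 3))  # gains move from 1.0 toward the optimal 0.5
```

Because only one scalar per channel is optimized, the adaptation has very few degrees of freedom, which is what lets the method fit a single test image without overfitting to its noise.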





UP-NeRF: Unconstrained Pose-Prior-Free Neural Radiance Fields (Supplement)

Neural Information Processing Systems

In this supplementary material, we provide additional implementation details (Appendix A) of our model and visualization of ablation studies (Appendix B) which are not included in our main paper. BARF-W, and BARF-WD are based on [2] because there is no official NeRF-W code available. The detailed architecture of UP-NeRF is shown in the Figure 1. First two authors have an equal contribution. As we mentioned in the main paper, the evaluation process entails two stages, which are test-time pose optimization and appearance optimization.